1. 04 Mar, 2024 1 commit
    • Add UDOP (#22940) · 836921fd
      NielsRogge authored
      
      
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More fixes
      
      * Fix copies
      
      * More improvements
      
      * More fixes
      
      * More improvements
      
      * Convert checkpoint
      
      * More improvements, set up tests
      
      * Fix more tests
      
      * Add UdopModel
      
      * More improvements
      
      * Fix equivalence test
      
      * More fixes
      
      * Redesign model
      
      * Extend conversion script
      
      * Use real inputs for conversion script
      
      * Add image processor
      
      * Improve conversion script
      
      * Add UdopTokenizer
      
      * Add fast tokenizer
      
      * Add converter
      
      * Update READMEs
      
      * Add processor
      
      * Add fully fledged tokenizer
      
      * Add fast tokenizer
      
      * Use processor in conversion script
      
      * Add tokenizer tests
      
      * Fix one more test
      
      * Fix more tests
      
      * Fix tokenizer tests
      
      * Enable fast tokenizer tests
      
      * Fix more tests
      
      * Fix additional_special_tokens of fast tokenizer
      
      * Fix tokenizer tests
      
      * Fix more tests
      
      * Fix equivalence test
      
      * Rename image to pixel_values
      
      * Rename seg_data to bbox
      
      * More renamings
      
      * Remove vis_special_token
      
      * More improvements
      
      * Add docs
      
      * Fix copied from
      
      * Update slow tokenizer
      
      * Update fast tokenizer design
      
      * Make text input optional
      
      * Add first draft of processor tests
      
      * Fix more processor tests
      
      * Fix decoder_start_token_id
      
      * Fix test_initialization
      
      * Add integration test
      
      * More improvements
      
      * Improve processor, add test
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Remove print statement
      
      * Update README and auto mapping
      
      * Delete files
      
      * Delete another file
      
      * Remove code
      
      * Fix test
      
      * Fix docs
      
      * Remove asserts
      
      * Add doc tests
      
      * Include UDOP in exotic model tests
      
      * Add expected tesseract decodings
      
      * Add sentencepiece
      
      * Use same design as T5
      
      * Add UdopEncoderModel
      
      * Add UdopEncoderModel to tests
      
      * More fixes
      
      * Fix fast tokenizer
      
      * Fix one more test
      
      * Remove parallelisable attribute
      
      * Fix copies
      
      * Remove legacy file
      
      * Copy from T5Tokenizer
      
      * Fix rebase
      
      * More fixes, copy from T5
      
      * More fixes
      
      * Fix init
      
      * Use ArthurZ/udop for tests
      
      * Make all model tests pass
      
      * Remove UdopForConditionalGeneration from auto mapping
      
      * Fix more tests
      
      * fixups
      
      * more fixups
      
      * fix the tokenizers
      
      * remove un-necessary changes
      
      * nits
      
      * nits
      
      * replace truncate_sequences_boxes with truncate_sequences for fix-copies
      
      * nit current path
      
      * add a test for input ids
      
      * ids that we should get taken from c9f7a32f57440d90ff79890270d376a1cc0acb68
      
      * nits converting
      
      * nits
      
      * apply ruff
      
      * nits
      
      * nits
      
      * style
      
      * fix slow order of addition
      
      * fix udop fast range as well
      
      * fixup
      
      * nits
      
      * Add docstrings
      
      * Fix gradient checkpointing
      
      * Update code examples
      
      * Skip tests
      
      * Update integration test
      
      * Address comment
      
      * Make fixup
      
      * Remove extra ids from tokenizer
      
      * Skip test
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update year
      
      * Address comment
      
      * Address more comments
      
      * Address comments
      
      * Add copied from
      
      * Update CI
      
      * Rename script
      
      * Update model id
      
      * Add AddedToken, skip tests
      
      * Update CI
      
      * Fix doc tests
      
      * Do not use Tesseract for the doc tests
      
      * Remove kwargs
      
      * Add original inputs
      
      * Update casting
      
      * Fix doc test
      
      * Update question
      
      * Update question
      
      * Use LayoutLMv3ImageProcessor
      
      * Update organization
      
      * Improve docs
      
      * Update forward signature
      
      * Make images optional
      
      * Remove deprecated device argument
      
      * Add comment, add add_prefix_space
      
      * More improvements
      
      * Remove kwargs
      
      ---------
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      836921fd
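A minimal usage sketch of the UdopProcessor and UdopForConditionalGeneration classes this commit introduces. The checkpoint id, the example words and boxes, and the LayoutLMv3-style processor call are assumptions for illustration, not details taken from the commit itself:

```python
from PIL import Image
from transformers import UdopProcessor, UdopForConditionalGeneration

# Assumed checkpoint id for the converted UDOP weights.
processor = UdopProcessor.from_pretrained("microsoft/udop-large")
model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")

image = Image.open("document.png").convert("RGB")  # any document scan
# Words and their 0-1000 normalized bounding boxes, normally produced by an OCR engine.
words = ["Invoice", "Date:", "2016-03-01"]
boxes = [[68, 10, 160, 30], [180, 10, 230, 30], [240, 10, 340, 30]]

prompt = "Question answering. What is the date?"
encoding = processor(images=image, text=prompt, text_pair=words, boxes=boxes, return_tensors="pt")
predicted_ids = model.generate(**encoding, max_new_tokens=20)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```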
  2. 14 Feb, 2024 1 commit
  3. 30 Jan, 2024 1 commit
    • [`Backbone`] Use `load_backbone` instead of `AutoBackbone.from_config` (#28661) · 2fa1c808
      amyeroberts authored
      * Enable instantiating model with pretrained backbone weights
      
      * Remove doc updates until changes made in modeling code
      
      * Use load_backbone instead
      
      * Add use_timm_backbone to the model configs
      
      * Add missing imports and arguments
      
      * Update docstrings
      
      * Make sure test is properly configured
      
      * Include recent DPT updates
      2fa1c808
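A rough sketch of the pattern change named in the commit title. The import path for load_backbone and the behaviour described in the comments are assumptions based on that title, so treat this as illustrative rather than the PR's actual diff:

```python
from transformers import AutoBackbone, DetrConfig
from transformers.utils.backbone_utils import load_backbone  # assumed module path

config = DetrConfig()

# Old pattern (shown for contrast): build the backbone from a nested backbone config,
# which always produces randomly initialised weights.
# backbone = AutoBackbone.from_config(config.backbone_config)

# New pattern: load_backbone reads the backbone-related fields on the model config
# (backbone, backbone_config, use_timm_backbone, use_pretrained_backbone) and returns
# either a timm backbone or a Transformers backbone, optionally with pretrained weights.
backbone = load_backbone(config)
print(backbone.channels)  # feature dimensions of the selected stages
```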
  4. 23 Jan, 2024 1 commit
  5. 03 Jan, 2024 1 commit
    • Add FastSpeech2Conformer (#23439) · d83ff5ee
      Connor Henderson authored
      * start - docs, SpeechT5 copy and rename
      
      * add relevant code from FastSpeech2 draft, have tests pass
      
      * make it an actual conformer, demo ex.
      
      * matching inference with original repo, includes debug code
      
      * refactor nn.Sequentials, start more desc. var names
      
      * more renaming
      
      * more renaming
      
      * vocoder scratchwork
      
      * matching vocoder outputs
      
      * hifigan vocoder conversion script
      
      * convert model script, rename some config vars
      
      * replace postnet with speecht5's implementation
      
      * passing common tests, file cleanup
      
      * expand testing, add output hidden states and attention
      
      * tokenizer + passing tokenizer tests
      
      * variety of updates and tests
      
      * g2p_en package setup
      
      * import structure edits
      
      * docstrings and cleanup
      
      * repo consistency
      
      * deps
      
      * small cleanup
      
      * forward signature param order
      
      * address comments except for masks and labels
      
      * address comments on attention_mask and labels
      
      * address second round of comments
      
      * remove old unneeded line
      
      * address comments part 1
      
      * address comments pt 2
      
      * rename auto mapping
      
      * fixes for failing tests
      
      * address comments part 3 (bart-like, train loss)
      
      * make style
      
      * pass config where possible
      
      * add forward method + tests to WithHifiGan model
      
      * make style
      
      * address arg passing and generate_speech comments
      
      * address Arthur comments
      
      * address Arthur comments pt2
      
      * lint changes
      
      * Sanchit comment
      
      * add g2p-en to doctest deps
      
      * move up self.encoder
      
      * onnx compatible tensor method
      
      * fix is symbolic
      
      * fix paper url
      
      * move models to espnet org
      
      * make style
      
      * make fix-copies
      
      * update docstring
      
      * Arthur comments
      
      * update docstring w/ new updates
      
      * add model architecture images
      
      * header size
      
      * md wording update
      
      * make style
      d83ff5ee
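A short usage sketch of the classes added here. The espnet checkpoint ids follow the "move models to espnet org" step above; the g2p-en dependency and the 22.05 kHz output rate are assumptions:

```python
import soundfile as sf
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan

# The tokenizer phonemizes text, which is why g2p-en appears as a doctest dependency above.
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
model = FastSpeech2ConformerWithHifiGan.from_pretrained("espnet/fastspeech2_conformer_with_hifigan")

inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
output = model(input_ids=inputs["input_ids"], return_dict=True)
waveform = output["waveform"]  # acoustic model plus HiFi-GAN vocoder in one forward pass

sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050)
```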
  6. 30 Nov, 2023 1 commit
    • Add SeamlessM4T v2 (#27779) · 29f1aee3
      Yoach Lacombe authored
      
      
      * add working conversion script
      
      * first non-working version of modeling code
      
      * update modeling code (working)
      
      * make style
      
      * make fix-copies
      
      * add config docstrings
      
      * add config to ignore docstring formatting due to unconventional markdown
      
      * fix copies
      
      * fix generation num_return_sequences
      
      * enrich docs
      
      * add and fix tests beside integration tests
      
      * update integration tests
      
      * update repo id
      
      * add tie weights and make style
      
      * correct naming in .md
      
      * fix imports and so on
      
      * correct docstrings
      
      * fix fp16 speech forward
      
      * fix speechencoder attention
      
      * make style
      
      * fix copied from
      
      * rename SeamlessM4Tv2-v2 to SeamlessM4Tv2
      
      * Apply suggestions on configuration
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove useless public models
      
      * fix private models + better naming for T2U models
      
      * clean speech encoder relative position embeddings
      
      * refactor chunk attention
      
      * add docstrings to chunk attention method
      
      * improve naming and docstrings
      
      * rename some attention variables + add temperature sampling in T2U model
      
      * rename DOCSTRINGS variable names
      
      * make style + remove 2 useless config parameters
      
      * enrich model card
      
      * remove any attention_head reference + fix temperature in T2U
      
      * new fmt and make style
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * rename spkr_id->speaker_id and change docstrings of get_char_input_ids
      
      * simplify v2attention
      
      * make style
      
      * Update seamless_m4t_v2.md
      
      * update code and tests with last update
      
      * update repo ids
      
      * fill article name, abstract and authors
      
      * update not_doctested and slow_doc tests
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      29f1aee3
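A hedged usage sketch of SeamlessM4Tv2Model covering the text-to-speech and text-to-text paths mentioned above; the checkpoint id and the language codes are illustrative:

```python
from transformers import AutoProcessor, SeamlessM4Tv2Model

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")

# Text-to-speech translation: generate() returns a waveform in the target language.
audio_array = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()

# Text-to-text translation: skip speech generation and decode the tokens instead.
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```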
  7. 24 Nov, 2023 1 commit
  8. 23 Oct, 2023 1 commit
    • Add Seamless M4T model (#25693) · cb45f71c
      Yoach Lacombe authored
      
      
      * first raw commit
      
      * still POC
      
      * tentative convert script
      
      * almost working speech encoder conversion scripts
      
      * intermediate code for encoder/decoders
      
      * add modeling code
      
      * first version of speech encoder
      
      * make style
      
      * add new adapter layer architecture
      
      * add adapter block
      
      * add first tentative config
      
      * add working speech encoder conversion
      
      * base model convert works now
      
      * make style
      
      * remove unnecessary classes
      
      * remove unnecessary functions
      
      * add modeling code speech encoder
      
      * rework logics
      
      * forward pass of sub components work
      
      * add modeling codes
      
      * some config modifs and modeling code modifs
      
      * save WIP
      
      * new edits
      
      * same output speech encoder
      
      * correct attention mask
      
      * correct attention mask
      
      * fix generation
      
      * new generation logics
      
      * erase comments
      
      * make style
      
      * fix typo
      
      * add some descriptions
      
      * new state
      
      * clean imports
      
      * add tests
      
      * make style
      
      * make beam search and num_return_sequences>1 works
      
      * correct edge case issue
      
      * correct SeamlessM4TConformerSamePadLayer copied from
      
      * replace ACT2FN relu by nn.relu
      
      * remove unnecessary return variable
      
      * move back a class
      
      * change name conformer_attention_mask ->conv_attention_mask
      
      * better nit code
      
      * add some Copied from statements
      
      * small nits
      
      * small nit in dict.get
      
      * rename t2u model -> conditionalgeneration
      
      * ongoing refactoring of structure
      
      * update models architecture
      
      * remove SeamlessM4TMultiModal classes
      
      * add tests
      
      * adapt tests
      
      * some non-working code for vocoder
      
      * add seamlessM4T vocoder
      
      * remove buggy line
      
      * fix some hifigan related bugs
      
      * remove hifigan specifc config
      
      * change
      
      * add WIP tokenization
      
      * add seamlessM4T working tokenizer
      
      * update tokenization
      
      * add tentative feature extractor
      
      * Update converting script
      
      * update working FE
      
      * refactor input_values -> input_features
      
      * update FE
      
      * changes in generation, tokenizer and modeling
      
      * make style and add t2u_decoder_input_ids
      
      * add intermediate outputs for ToSpeech models
      
      * add vocoder to speech models
      
      * update valueerror
      
      * update FE with languages
      
      * add vocoder convert
      
      * update config docstrings and names
      
      * update generation code and configuration
      
      * remove todos and update config.pad_token_id to generation_config.pad_token_id
      
      * move block vocoder
      
      * remove unnecessary code and uniformize tospeech code
      
      * add feature extractor import
      
      * make style and fix some copies from
      
      * correct consistency + make fix-copies
      
      * add processor code
      
      * remove comments
      
      * add fast tokenizer support
      
      * correct pad_token_id in M4TModel
      
      * correct config
      
      * update tests and code + make style

      * make some suggested corrections - correct comments and change naming

      * rename some attributes

      * rename some attributes

      * remove unnecessary sequential
      
      * remove option to use dur predictor
      
      * nit
      
      * refactor hifigan
      
      * replace normalize_mean and normalize_var with do_normalize + save lang ids to generation config
      
      * add tests
      
      * change tgt_lang logic
      
      * update generation ToSpeech
      
      * add support import SeamlessM4TProcessor
      
      * fix generate
      
      * make tests
      
      * update integration tests, add option to only return text and update tokenizer fast
      
      * fix wrong function call
      
      * update import and convert script
      
      * update integration tests + update repo id
      
      * correct paths and add first test
      
      * update how new attention masks are computed
      
      * update tests
      
      * take first care of batching in vocoder code
      
      * add batching with the vocoder
      
      * add waveform lengths to model outputs
      
      * make style
      
      * add generate kwargs + forward kwargs of M4TModel
      
      * add docstrings forward methods
      
      * reformat docstrings

      * add docstrings t2u model

      * add another round of modeling docstrings + rename speaker_id -> spkr_id
      
      * make style
      
      * fix check_repo
      
      * make style
      
      * add seamlessm4t to toctree
      
      * correct check_config_attributes
      
      * write config docstrings + some modifs
      
      * make style
      
      * add docstrings tokenizer
      
      * add docstrings to processor, fe and tokenizers
      
      * make style
      
      * write first version of model docs
      
      * fix FE + correct FE test
      
      * fix tokenizer + add correct integration tests
      
      * fix most tokenization tests
      
      * make style
      
      * correct most processor test
      
      * add generation tests and fix num_return_sequences > 1
      
      * correct integration tests -still one left
      
      * make style
      
      * correct position embedding
      
      * change numbeams to 1
      
      * refactor some modeling code and correct one test
      
      * make style
      
      * correct typo
      
      * refactor intermediate fnn
      
      * refactor feedforward conformer
      
      * make style
      
      * remove comments
      
      * make style
      
      * fix tokenizer tests
      
      * make style
      
      * correct processor tests
      
      * make style
      
      * correct S2TT integration
      
      * Apply suggestions from Sanchit code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct typo
      
      * replace torch.nn->nn + make style
      
      * change Output naming (waveforms -> waveform) and ordering
      
      * nit renaming and formating
      
      * remove return None when not necessary
      
      * refactor SeamlessM4TConformerFeedForward
      
      * nit typo
      
      * remove almost copied from comments
      
      * add a copied from comment and remove an unnecessary dropout

      * remove inputs_embeds from speechencoder

      * remove backward compatibility function

      * reformat class docstrings for a few components

      * remove unnecessary methods

      * split something hard to read over 2 lines
      
      * make style
      
      * replace two steps offset by one step as suggested
      
      * nice typo
      
      * move warnings
      
      * remove useless lines from processor
      
      * make non-standard generation test more robust
      
      * remove torch.inference_mode from tests
      
      * split integration tests
      
      * enrich md
      
      * rename control_symbol_vocoder_offset->vocoder_offset
      
      * clean convert file
      
      * remove tgt_lang and src_lang from FE
      
      * change generate docstring of ToText models
      
      * update generate docstring of tospeech models
      
      * unify how to deal with text_decoder_input_ids
      
      * add default spkr_id
      
      * unify tgt_lang for t2u_model
      
      * simplify tgt_lang verification
      
      * remove a todo
      
      * change config docstring
      
      * make style
      
      * simplify t2u_tgt_lang_id
      
      * make style
      
      * enrich/correct comments
      
      * enrich .md
      
      * correct typo in docstrings
      
      * add torchaudio dependency
      
      * update tokenizer
      
      * make style and fix copies
      
      * modify SeamlessM4TConverter with new tokenizer behaviour
      
      * make style
      
      * correct small typo docs
      
      * fix import
      
      * update docs and add requirement to tests
      
      * add convert_fairseq2_to_hf in utils/not_doctested.txt
      
      * update FE
      
      * fix imports and make style
      
      * remove torchaudio in FE test
      
      * add seamless_m4t.md to utils/not_doctested.txt
      
      * nits and change the way docstring dataset is loaded
      
      * move checkpoints from ylacombe/ to facebook/ orga
      
      * refactor warning/error to be in the 119 line width limit
      
      * round overly precise floats

      * add stereo audio behaviour

      * refactor .md and make style

      * enrich docs with a more precise architecture description
      
      * readd undocumented models
      
      * make fix-copies
      
      * apply some suggestions
      
      * Apply suggestions from code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * correct bug from previous commit
      
      * refactor a parameter allowing to clean the code + some small nits
      
      * clean tokenizer
      
      * make style and fix
      
      * make style
      
      * clean tokenizers arguments
      
      * add precisions for some tests
      
      * move docs from not_tested to slow
      
      * modify tokenizer according to last comments
      
      * add copied from statements in tests
      
      * correct convert script
      
      * correct parameter docstring style
      
      * correct tokenization
      
      * correct multi gpus
      
      * make style
      
      * clean modeling code
      
      * make style
      
      * add copied from statements
      
      * add copied statements
      
      * add support with ASR pipeline
      
      * remove file added inadvertently
      
      * fix docstrings seamlessM4TModel
      
      * add seamlessM4TConfig to OBJECTS_TO_IGNORE due of unconventional markdown
      
      * add seamlessm4t to assisted generation ignored models
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      cb45f71c
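For comparison with the v2 sketch above, a speech-to-text usage sketch of the original SeamlessM4TModel; the checkpoint id and the random stand-in waveform are assumptions:

```python
import torch
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Speech-to-text translation from a 16 kHz waveform (random audio as a stand-in here).
audio = torch.randn(1, 16000)
audio_inputs = processor(audios=audio, sampling_rate=16000, return_tensors="pt")

output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```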
  9. 18 Oct, 2023 1 commit
    • Add fuyu model (#26911) · caa0ff0b
      Pablo Montalvo authored
      
      
      * initial commit
      
      * add processor, add fuyu naming
      
      * add draft processor
      
      * fix processor
      
      * remove dropout to fix loading of weights
      
      * add image processing fixes from Pedro
      
      * fix
      
      * fix processor
      
      * add basic processing fuyu test
      
      * add documentation and TODO
      
      * address comments, add tests, add doc
      
      * replace assert with torch asserts
      
      * add Mixins and fix tests
      
      * clean imports
      
      * add model tester, clean imports
      
      * fix embedding test
      
      * add updated tests from pre-release model
      
      * Processor: return input_ids used for inference
      
      * separate processing and model tests
      
      * relax test tolerance for embeddings
      
      * add test for logit comparison
      
      * make sure fuyu image processor is imported in the init
      
      * fix formatting
      
      * more formatting issues
      
      * and more
      
      * fixups
      
      * remove some stuff
      
      * nits
      
      * update init
      
      * remove the fuyu file
      
      * Update integration test with release model
      
      * Update conversion script.
      
      The projection is not used, as confirmed by the authors.
      
      * improve generation
      
      * Remove duplicate function
      
      * Trickle down patches to model call
      
      * processing fuyu updates
      
      * remove things
      
      * fix prepare_inputs_for_generation to fix generate()
      
      * remove model_input
      
      * update
      
      * add generation tests
      
      * nits
      
      * draft leverage automodel and autoconfig
      
      * nits
      
      * fix dtype patch
      
      * address comments, update READMEs and doc, include tests
      
      * add working processing test, remove refs to subsequences
      
      * add tests, remove Sequence classification
      
      * processing
      
      * update
      
      * update the conversion script
      
      * more processing cleanup
      
      * safe import
      
      * take out ModelTesterMixin for early release
      
      * more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * and more
      
      * register a buffer
      
      * nits
      
      * add postprocessing of generate output
      
      * nits
      
      * updates
      
      * add one working test
      
      * fix test
      
      * make fixup works
      
      * fixup
      
      * Arthur's updates
      
      * nits
      
      * update
      
      * update
      
      * fix processor
      
      * update tests
      
      * pass more fixups
      
      * fix
      
      * nits
      
      * don't import torch
      
      * skip fuyu config for now
      
      * fixup done
      
      * fixup
      
      * update
      
      * oups
      
      * nits
      
      * Use input embeddings
      
      * no buffer
      
      * update
      
      * styling processing fuyu
      
      * fix test
      
      * update licence
      
      * protect torch import
      
      * fixup and update not doctested
      
      * kwargs should be passed
      
      * updates

      * update the imports in the test
      
      * protect import
      
      * protecting imports
      
      * protect imports in type checking
      
      * add testing decorators
      
      * protect top level import structure
      
      * fix typo
      
      * fix check init
      
      * move requires_backend to functions
      
      * Imports
      
      * Protect types
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Lysandre <lysandre@huggingface.co>
      caa0ff0b
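A minimal sketch of the FuyuProcessor / FuyuForCausalLM pairing introduced here, assuming the adept/fuyu-8b release checkpoint and a local image file:

```python
from PIL import Image
from transformers import FuyuProcessor, FuyuForCausalLM

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b")

image = Image.open("street_scene.png")  # any image
prompt = "Generate a coco-style caption.\n"

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=10)
# Only the newly generated tokens at the end of the sequence are of interest.
print(processor.batch_decode(generated_ids[:, -10:], skip_special_tokens=True)[0])
```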
  10. 01 Sep, 2023 1 commit
    • add VITS model (#24085) · 4ece3b94
      Matthijs Hollemans authored
      
      
      * add VITS model
      
      * let's vits
      
      * finish TextEncoder (mostly)
      
      * rename VITS to Vits
      
      * add StochasticDurationPredictor
      
      * add flow model
      
      * add generator
      
      * correctly set vocab size
      
      * add tokenizer
      
      * remove processor & feature extractor
      
      * add PosteriorEncoder
      
      * add missing weights to SDP
      
      * also convert LJSpeech and VCTK checkpoints
      
      * add training stuff in forward
      
      * add placeholder tests for tokenizer
      
      * add placeholder tests for model
      
      * starting cleanup
      
      * let the great renaming begin!
      
      * use config
      
      * global_conditioning
      
      * more cleaning
      
      * renaming variables
      
      * more renaming
      
      * more renaming
      
      * it never ends
      
      * reticulating the splines
      
      * more renaming
      
      * HiFi-GAN
      
      * doc strings for main model
      
      * fixup
      
      * fix-copies
      
      * don't make it a PreTrainedModel
      
      * fixup
      
      * rename config options
      
      * remove training logic from forward pass
      
      * simplify relative position
      
      * use actual checkpoint
      
      * style
      
      * PR review fixes
      
      * more review changes
      
      * fixup
      
      * more unit tests
      
      * fixup
      
      * fix doc test
      
      * add integration test
      
      * improve tokenizer tests
      
      * add tokenizer integration test
      
      * fix tests on GPU (gave OOM)
      
      * conversion script can handle repos from hub
      
      * add conversion script for all MMS-TTS checkpoints
      
      * automatically create a README for the converted checkpoint
      
      * small changes to config
      
      * push README to hub
      
      * only show uroman note for checkpoints that need it
      
      * remove conversion script because code formatting breaks the readme
      
      * make WaveNet layers configurable
      
      * rename variables
      
      * simplifying the math
      
      * output attentions and hidden states
      
      * remove VitsFlip in flow model
      
      * also got rid of the other flip
      
      * fix tests
      
      * rename more variables
      
      * rename tokenizer, add phonemization
      
      * raise error when phonemizer missing
      
      * re-order config docstrings to match method
      
      * change config naming
      
      * remove redundant str -> list
      
      * fix copyright: vits authors -> kakao enterprise
      
      * (mean, log_variances) -> (prior_mean, prior_log_variances)
      
      * if return dict -> if not return dict
      
      * speed -> speaking rate
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update fused tanh sigmoid
      
      * reduce dims in tester
      
      * audio -> output_values
      
      * audio -> output_values in tuple out
      
      * fix return type
      
      * fix return type
      
      * make _unconstrained_rational_quadratic_spline a function
      
      * all nn's to accept a config
      
      * add spectro to output
      
      * move {speaking rate, noise scale, noise scale duration} to config
      
      * path -> attn_path
      
      * idxs -> valid idxs -> padded idxs
      
      * output values -> waveform
      
      * use config for attention
      
      * make generation work
      
      * harden integration test
      
      * add spectrogram to dict output
      
      * tokenizer refactor
      
      * make style
      
      * remove 'fake' padding token
      
      * harden tokenizer tests
      
      * ron norm test
      
      * fprop / save tests deterministic
      
      * move uroman to tokenizer as much as possible
      
      * better logger message
      
      * fix vivit imports
      
      * add uroman integration test
      
      * make style
      
      * up
      
      * matthijs -> sanchit-gandhi
      
      * fix tokenizer test
      
      * make fix-copies
      
      * fix dict comprehension
      
      * fix config tests
      
      * fix model tests
      
      * make outputs consistent with reverse/not reverse
      
      * fix key concat
      
      * more model details
      
      * add author
      
      * return dict
      
      * speaker error
      
      * labels error
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/vits/convert_original_checkpoint.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * remove uromanize
      
      * add docstrings
      
      * add docstrings for tokenizer
      
      * upper-case skip messages
      
      * fix return dict
      
      * style
      
      * finish tests
      
      * update checkpoints
      
      * make style
      
      * remove doctest file
      
      * revert
      
      * fix docstring
      
      * fix tokenizer
      
      * remove uroman integration test
      
      * add sampling rate
      
      * fix docs / docstrings
      
      * style
      
      * add sr to model output
      
      * fix outputs
      
      * style / copies
      
      * fix docstring
      
      * fix copies
      
      * remove sr from model outputs
      
      * Update utils/documentation_tests.txt
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add sr as allowed attr
      
      ---------
      Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      4ece3b94
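A minimal sketch of the VITS / MMS-TTS usage this PR enables; the facebook/mms-tts-eng checkpoint id is an assumption based on the MMS-TTS conversion step above:

```python
import torch
from transformers import VitsTokenizer, VitsModel

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform                 # (batch, num_samples)
sampling_rate = model.config.sampling_rate  # rate to use when writing the audio out
```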
  11. 21 Aug, 2023 1 commit
    • Add Pop2Piano (#21785) · 450a181d
      Susnato Dhar authored
      
      
      * init commit
      
      * config updated also some modeling
      
      * Processor and Model config combined
      
      * extraction pipeline (up to before spectrogram & mel_conditioner) added but not properly tested
      
      * model loading successful!
      
      * feature extractor done!
      
      * FE can now be called from HF
      
      * postprocessing added in fe file
      
      * same as prev commit
      
      * Pop2PianoConfig doc done
      
      * cfg docs slightly changed
      
      * fe docs done
      
      * batched
      
      * batched working!
      
      * temp
      
      * v1
      
      * checking
      
      * trying to go with generate
      
      * with generate and model tests passed
      
      * before rebasing
      
      * .
      
      * tests done docs done remaining others & nits
      
      * nits
      
      * LogMelSpectogram shifted to FeatureExtractor
      
      * is_tf removed from pop2piano/init
      
      * import solved
      
      * tokenization tests added
      
      * minor fixes regarding modeling_pop2piano
      
      * tokenizer changed to only return midi_object and other changes
      
      * Updated paper abstract (Camera-ready version) (#2)
      
      * more comments and nits
      
      * ruff changes
      
      * code quality fix
      
      * sg comments
      
      * t5 change added and rebased
      
      * comments except batching
      
      * batching done
      
      * comments
      
      * small doc fix
      
      * example removed from modeling
      
      * ckpt
      
      * forward it compatible with fe and generation done
      
      * comments
      
      * comments
      
      * code-quality fix(maybe)
      
      * ckpts changed
      
      * doc file changed from mdx to md
      
      * test fixes
      
      * tokenizer test fix
      
      * changes
      
      * nits done main changes remaining
      
      * code modified
      
      * Pop2PianoProcessor added with tests
      
      * other comments
      
      * added Pop2PianoProcessor to dummy_objects
      
      * added require_onnx to modeling file
      
      * changes
      
      * update .md file
      
      * remove extra line in index.md
      
      * back to the main index
      
      * added pop2piano to index
      
      * Added tokenizer.__call__ with valid args and batch_decode and aligned the processor part too
      
      * changes
      
      * added return types to 2 tokenizer methods
      
      * the PR build test might work now
      
      * added backends
      
      * PR build fix
      
      * vocab added
      
      * comments
      
      * refactored vocab into 1 file
      
      * added conversion script
      
      * comments
      
      * essentia version changed in .md
      
      * comments
      
      * more tokenizer tests added
      
      * minor fix
      
      * tests extended for outputs acc check
      
      * small fix
      
      ---------
      Co-authored-by: Jongho Choi <sweetcocoa@snu.ac.kr>
      450a181d
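A hedged end-to-end sketch of the Pop2Piano pipeline (pop audio in, piano-cover MIDI out); the checkpoint id, the composer token, and the extra audio/MIDI dependencies (librosa, essentia, pretty_midi) are assumptions:

```python
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor

audio, sr = librosa.load("pop_song.wav", sr=44100)  # any pop recording

model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")

inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt")
model_output = model.generate(input_features=inputs["input_features"], composer="composer1")
# batch_decode returns pretty_midi objects that can be written straight to disk.
midi = processor.batch_decode(token_ids=model_output, feature_extractor_output=inputs)["pretty_midi_objects"][0]
midi.write("piano_cover.mid")
```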
  12. 18 Aug, 2023 1 commit
    • new model: IDEFICS via HuggingFaceM4 (#24796) · 6c811a32
      Stas Bekman authored
      
      
      * rename
      
      * restore
      
      * mappings
      
      * unedited tests+docs
      
      * docs
      
      * fixes
      
      * fix auto-sync breakage
      
      * cleanup
      
      * wip
      
      * wip
      
      * add fetch_images
      
      * remove einops dependency
      
      * update
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * re-add
      
      * add batching
      
      * rework
      
      * fix
      
      * improve
      
      * add Leo as I am extending his work
      
      * cleanup
      
      * fix
      
      * cleanup
      
      * slow-test
      
      * fix
      
      * fix
      
      * fixes
      
      * deal with warning
      
      * rename modified llama classes
      
      * rework fetch_images
      
      * alternative implementation
      
      * cleanup
      
      * strict version
      
      * cleanup
      
      * [`IDEFICS`] Fix idefics ci (#25056)
      
      * Fix IDEFICS CI
      
      * fix test file
      
      * fixup
      
      * some changes to make tests pass
      
      * fix
      
      * fixup
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * remove compat checks
      
      * style
      
      * explain that Idefics is not for training from scratch
      
      * require pt>=2.0
      
      * fix idefics vision config (#25092)
      
      * fix idefics vision config
      
      * fixup
      
      * clean
      
      * Update src/transformers/models/idefics/configuration_idefics.py
      
      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * cleanup
      
      * style
      
      * cleanup
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * upcase

      * sequence of images

      * handle the case with no images

      * Update src/transformers/image_processing_utils.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * support pure lm take 2
      
      * support tokenizer options
      
      * parameterize num_channels
      
      * fix upcase
      
      * s|IdeficsForCausalLM|IdeficsForVisionText2Text|g
      
      * manual to one line
      
      * addressing review
      
      * unbreak
      
      * remove clip dependency
      
      * fix test
      
      * consistency
      
      * PIL import
      
      * Idefics prefix
      
      * Idefics prefix
      
      * hack to make tests work
      
      * style
      
      * fix
      
      * fix
      
      * revert
      
      * try/finally
      
      * cleanup
      
      * clean up
      
      * move
      
      * [`IDEFICS`] Fix idefics config refactor (#25149)
      
      * refactor config
      
      * nuke init weights
      
      * more refactor
      
      * oops
      
      * remove visual question answering pipeline support
      
      * Update src/transformers/models/idefics/clip.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

      * Update src/transformers/models/idefics/modeling_idefics.py

      * cleanup

      * mv clip.py vision.py

      * tidyup

      ---------
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      Co-authored-by: Stas Bekman <stas@stason.org>
      
      * fix
      
      * license
      
      * condition on pt
      
      * fix
      
      * style
      
      * fix
      
      * rm torchvision dependency, allow custom transforms
      
      * address review
      
      * rework device arg
      
      * add_eos_token
      
      * s/transforms/transform/
      
      * fix top level imports
      
      * fix return value
      
      * cleanup
      
      * cleanup
      
      * fix
      
      * style
      
      * license
      
      * license
      
      * Update src/transformers/models/idefics/image_processing_idefics.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * add a wrapper to freeze vision layers
      
      * tidyup
      
      * use the correct std/mean settings
      
      * parameterize values from config
      
      * add tests/models/idefics/test_image_processing_idefics.py
      
      * add test_processor_idefics.py
      
      * cleanup
      
      * cleanups
      
      * fix
      
      * fix
      
      * move to the right group
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * add perceiver config

      * reset

      * missing arg docs

      * Apply suggestions from code review
      Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>
      
      * address review comments
      
      * inject automatic end of utterance tokens (#25218)
      
      * inject automatic end of utterance tokens
      
      * fix
      
      * fix
      
      * fix
      
      * rework to not use the config
      
      * not end_of_utterance_token at the end
      
      * Update src/transformers/models/idefics/processing_idefics.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * address review

      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/image_processing_utils.py
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * [`Idefics`] add image_embeddings option in generate-related methods (#25442)
      
      * add image_embeddings option in generate-related methods
      
      * style
      
      * rename image_embeddings and allow perceiver embeddings precomputation
      
      * compute embeddings within generate
      
      * make is_encoder_decoder= True the default in config
      
      * nested if else fix
      
      * better triple check
      
      * switch if elif order for pixel values / img embeds
      
      * update model_kwargs perceiver only at the end
      
      * use _prepare_model_inputs instead of encoder_decoder logic
      
      * fix comment typo
      
      * fix config default for is_encoder_decoder
      
      * style
      
      * add typehints
      
      * precompute in forward
      
      * doc builder
      
      * style
      
      * pop instead of get image hidden states
      
      * Trigger CI
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/idefics/modeling_idefics.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * fix * + indentation + style

      * simplify a bit the use_resampler logic using comments

      * update docstrings

      * Trigger CI

      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix rebase changes
      
      * unbreak #25237 - to be fixed in follow up PRs
      
      * is_composition = False
      
      * no longer needed
      
      ---------
      Co-authored-by: leot13 <leo.tronchon@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      6c811a32
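A sketch of the interleaved image-and-text prompting style built around the IdeficsForVisionText2Text class named above; the checkpoint id, the placeholder image URL, and the exact processor call are assumptions:

```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Prompts interleave text with images (URLs or PIL images).
prompts = [
    [
        "User: What is in this image?",
        "https://images.example.com/dog.jpg",  # placeholder URL, substitute any image
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt")

# Stop generating once the model emits the end-of-utterance token.
exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids
generated_ids = model.generate(**inputs, eos_token_id=exit_condition, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```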
  13. 25 Jul, 2023 1 commit
  14. 20 Jul, 2023 1 commit
    • Deprecate unused OpenLlama architecture (#24922) · 79444f37
      Tom Aarsen authored
      * Resolve typo in check_repo.py
      
      * Specify encoding when opening modeling files
      
      * Deprecate the OpenLlama architecture
      
      * Add disclaimer pointing to Llama
      
      I'm open to different wordings here
      
      * Match the capitalisation of LLaMA
      79444f37
  15. 13 Jul, 2023 1 commit
  16. 07 Jul, 2023 1 commit
  17. 15 Jun, 2023 1 commit
  18. 14 Jun, 2023 1 commit
    • [WIP] add EnCodec model (#23655) · 0c3fdccf
      Matthijs Hollemans authored
      
      
      * boilerplate stuff
      
      * messing around with the feature extractor
      
      * fix feature extractor
      
      * unit tests for feature extractor
      
      * rename speech to audio
      
      * quick-and-dirty import of Meta's code
      
      * import weights (sort of)
      
      * cleaning up
      
      * more cleaning up
      
      * move encoder/decoder args into config
      
      * cleanup model
      
      * rename EnCodec -> Encodec
      
      * RVQ parameters in config
      
      * add slow test
      
      * add lstm init and test_init
      
      * Add save & load
      
      * finish EncodecModel
      
      * remove decoder_input_values as they are not used anywhere (not removed from doc yet)
      
      * fix test feature extraction model name
      
      * Add better slow test
      
      * Fix tests
      
      * some fixup and cleaning
      
      * Improve further
      
      * cleaning up quantizer
      
      * fix up conversion script
      
      * tests don't pass, _encode_frame does not work
      
      * update tests with output per encode and decode
      
      * more cleanup
      
      * rename _codebook
      
      * remove old config cruft
      
      * ratios & hop_length
      
      * use ModuleList instead of Sequential
      
      * clean up resnet block
      
      * update types
      
      * update tests
      
      * fixup
      
      * quick cleanup
      
      * fix padding
      
      * more styling
      
      * add patrick feedback
      
      * fix copies
      
      * fixup
      
      * fix lstm
      
      * fix shape issues
      
      * fixup
      
      * rename conv layers
      
      * fixup
      
      * fix decoding
      
      * small conv refactoring
      
      * remove norm_params
      
      * simplify conv layers
      
      * rename conv layers
      
      * stuff
      
      * Clean up
      
      * Add padding logic
      
      use padding mask
      
      small conv refactoring
      
      remove norm_params
      
      simplify conv layers
      
      rename conv layers
      
      stuff
      
      add batched test
      
      update
      
      Clean up
      
      merge and update for padding
      
      fix padding
      
      fixup
      
      * clean up more
      
      * clean up more
      
      * More clean ups
      
      * cleanup convolutions
      
      * typo
      
      * fix typos
      
      * fixup
      
      * build PR doc?
      
      * start refactoring docstring
      
      * fix: don't pad when no stride and chunk
      
      * update docstring
      
      * update docstring
      
      * nits
      
      * update going to lunch
      
      * update config and model
      
      * fix broken tests (because of the config changes)

      * fix scale computation

      * fixup

      * only return dict if specified or if config returns it
      
      * remove todos
      
      * update defaults in config
      
      * update conversion script
      
      * fix doctest
      
      * more docstring + fixup
      
      * nits on batched_tests
      
      * more nits
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update based on review

      * fix update

      * update tests

      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fixup
      
      * add overlap and chunl_length_s
      
      * cleanup feature extraction
      
      * test edge cases truncation and padding
      
      * correct processor values
      
      * update config encodec, nits
      
      * fix tests
      
      * fixup
      
      * fix 24Hz test
      
      * all tests are green
      
      * fix fixup
      
      * Apply suggestions from code review
      
      * revert readme changes
      
      * fixup
      
      * add example
      
      * use facebook checkpoints
      
      * fix typo
      
      * no pipeline tests
      
      * use self.pad everywhere we can

      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update based on review
      
      * update
      
      * update mdx
      
      * fix bug and tests
      
      * fixup
      
      * fix doctest
      
      * remove comment
      
      * more nits
      
      * add more coverage for `test_truncation_and_padding`
      
      * fixup
      
      * add last test
      
      * fix text
      
      * nits
      
      * Update tests/models/encodec/test_modeling_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * take care of the last comments
      
      * typo
      
      * fix test
      
      * nits
      
      * fixup
      
      * Update src/transformers/models/encodec/feature_extraction_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: arthur.zucker@gmail.com <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      0c3fdccf
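A usage sketch of the EncodecModel encode/decode round trip; the 24 kHz checkpoint id and the dummy LibriSpeech dataset are assumptions used for illustration:

```python
from datasets import load_dataset
from transformers import AutoProcessor, EncodecModel

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = dataset[0]["audio"]["array"]

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

# Explicit round trip through the residual vector quantizer...
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]

# ...or the same compression/decompression in a single forward pass.
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```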
  19. 06 Jun, 2023 1 commit
    • Add TimmBackbone model (#22619) · a717e031
      amyeroberts authored
      
      
      * Add test_backbone for convnext
      
      * Add TimmBackbone model
      
      * Add check for backbone type
      
      * Tidying up - config checks
      
      * Update convnextv2
      
      * Tidy up
      
      * Fix indices & clearer comment
      
      * Exceptions for config checks
      
      * Correctly update config for tests
      
      * Safer imports
      
      * Safer safer imports
      
      * Fix where decorators go
      
      * Update import logic and backbone tests
      
      * More import fixes
      
      * Fixup
      
      * Only import all_models if torch available
      
      * Fix kwarg updates in from_pretrained & main rebase
      
      * Tidy up
      
      * Add tests for AutoBackbone
      
      * Tidy up
      
      * Fix import error
      
      * Fix up
      
      * Install natten in doc_test_job
      
      * Revert back to setting self._out_xxx directly
      
      * Bug fix - out_indices mapping from out_features
      
      * Fix tests
      
      * Don't accept output_loading_info for Timm models
      
      * Set out_xxx and don't remap
      
      * Use smaller checkpoint for test
      
      * Don't remap timm indices - check out_indices based on stage names
      
      * Skip test as it's n/a
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Cleaner imports / spelling is hard

      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      a717e031
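A hedged sketch of loading a timm checkpoint through the new TimmBackbone path; the calling convention via AutoBackbone and its keyword arguments are assumptions:

```python
import torch
from transformers import AutoBackbone

# Assumed convention: a timm model name plus use_timm_backbone=True routes
# AutoBackbone to the TimmBackbone wrapper added by this PR.
backbone = AutoBackbone.from_pretrained(
    "resnet18",
    use_timm_backbone=True,
    use_pretrained_backbone=True,
    out_indices=(1, 2, 3, 4),
)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
feature_maps = outputs.feature_maps  # one feature map per requested stage
```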
  20. 30 May, 2023 1 commit
  21. 07 Mar, 2023 1 commit
    • [Time-Series] informer model (#21099) · 8abe4930
      Eli Simhayev authored
      * added informer to gitignore
      
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * moved enc-dec init to InformerEncoder/Decoder init
      
      * added 'init_std' to config, now model init works!
      
      * WIP conversion script, and added code sources
      
      * WIP conversion script: loading original informer pth works
      
      * WIP conversion script: change defaults in the config
      
      * WIP conversion script: supporting Informer input embedding
      
      * WIP conversion script: added parameters for the informer embed
      
      * WIP conversion script: change dim_feedforward=2048
      
      * WIP conversion script: remove unused args for loading checkpoint
      
      * just cleaning up
      
      * DataEmbedding removed, after thinking with Kashif
      
      * working on forward pass
      
      * WIP forward pass: trying to establish working batch for forward pass
      
      * cleaning and finalizing
      
      * adding HF names and docs
      
      * init after cleaning works
      
      * WIP in tests
      
      * added docs for the informer specific args
      
      * fix style
      
      * undo change
      
      * cleaning informer, now need to work only enc-dec
      
      * initial enc-dec classes
      
      * added encoder and decoder
      
      * added todo
      
      * add todos for conv_layers
      
      * added decoder docs from vanilla
      
      * added encoder docs from vanilla
      
      * remove encoder decoder from the original informer
      
      * removed AttentionLayer from the original paper
      
      * removed TriangularCausalMask, same as decoder_attention_mask
      
      * initial sparse attention
      
      * use conv_layers
      
      * fixed test_config test
      
      * fix parenthesis when iterating zip(layers, conv_layers)
      
      * error found in prob attention, added sizes as comments
      
      * fix sizes
      
      * added proposal for q_reduce indexing, and remove unused
      
      * WIP ProbMask, and changed factor=2 for testing
      
      * remove unused libs for this PR for creating the env
      
      * fix checking the attn_weights.size() after bmm
      
      * Q_reduce: changed from torch.gather to simple slicing
      
      * WIP calculate final attn_output
      
      * finish adding v_aggregated, attn_output ready
      
      * changed tgt_len to u in attention_mask, need to fix the size error
      
      * comment attention_mask for encoder, and fix if cond for v_agg
      
      * added ProbMask support (wip), removed old original code
      
      * finished ProbMask 😄
      
      
      
      * Revert "remove unused libs for this PR for creating the env"
      
      This reverts commit 11a081e09e92771e51a5d2758d53a9afb59547f0.
      
      * fixes
      
      * make style
      
      * fix initial tests
      
      * fix more tests
      
      * dry
      
      * make style
      
      * remove unused files
      
      * style
      
      * added integration tests
      
      * fix num_static_real_features
      
      * fix header
      
      * remove unused function
      
      * fix example
      
      * fix docs
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/informer/modeling_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixes for reviewer
      
      * use prediction_length from model
      
      * fix style
      
      * fixed informer.mdx
      
      * added to index
      
      * updated readme
      
      * undo
      
      * make fix-copies
      
      * typo
      
      * fix copy
      
      * added Informer to toctree
      
      * in order
      
      * fixed comments
      
      * remove unneeded new lines in docs
      
      * make static real and cat optional
      
      * fix use of distil conv layers
      
      * fixed integration test
      
      * added checkpoint for convlayer
      
      * make fix-copies
      
      * updated from time series model
      
      * make fix-copies
      
      * copy decoder
      
      * fix unit tests
      
      * updated scaling config
      
      * fix integration tests
      
      * IGNORE_NON_TESTED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * updated check configs
      
      * fix formatting
      
      * undo change from time series
      
      * prediction_length should not be None
      
      * align with the blog: prettify ProbSparse and change attention_factor to sampling_factor
      
      * make style
      
      * make fix-copies
      
      * niels CR: update contributed by
      
      * niels CR: update configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * niels CR: update kashif -> huggingface
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: `sampling_factor` only relevant when `attention_type`=prob
      
      * make style
      
      * fixed U_part: added multiplication by `L_Q`
      
      * fixed bug: remove `is not None` from `if config.distil`
      
      * fixed test: `decoder_seq_length` to `encoder_seq_length` in cross_attentions check
      
      * fix integration tests
      
      * updated model hub
      
      * do not shift as in training
      
      * undo
      
      * fix make-copies
      
      * make fix-copies
      
      * added `if prediction_length is None`
      
      * changed `ProbSparseAttention` to `InformerProbSparseAttention`
      
      * changed `V_sum` -> `v_mean_dim_time`
      
      * changed `ConvLayer` to `InformerConvLayer` and fixed `super()`
      
      * TimeSeriesTansformer->Informer in decoder's Copied from
      
      * more descriptive in ProbSparse
      
      * make style
      
      * fix copied from
      
      * Revert "added `if prediction_length is None`"
      
      This reverts commit b4cbddfa05e3bd739b79569cd3c3b89e316f2451.
      
      * fixed indent
      
      * use InformerSinusoidalPositionalEmbedding
      
      * make fix-style
      
      * fix from #21860
      
      * fix name
      
      * make fix-copies
      
      * use time series utils
      
      * fix dec num_heads
      
      * docstring
      
      * added time series util doc
      
      * _import_structure
      
      * formatting
      
      * changes from review
      
      * make style
      
      * fix docs
      
      * fix doc
      
      * removed NegativeLogLikelihood
      
      ---------
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      8abe4930
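A sketch of running a converted Informer checkpoint on a pre-batched time-series example; the dataset repo, the batch keys, and the checkpoint id are assumptions, not details from this commit:

```python
import torch
from huggingface_hub import hf_hub_download
from transformers import InformerForPrediction

file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)

model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly")

# Teacher-forced forward pass; outputs.loss can be used for training.
outputs = model(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    static_real_features=batch["static_real_features"],
    future_values=batch["future_values"],
    future_time_features=batch["future_time_features"],
)
print(outputs.loss)
```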
  22. 16 Feb, 2023 1 commit
  23. 15 Feb, 2023 1 commit
  24. 10 Feb, 2023 1 commit
    • Remove more unused attributes in config classes (#21543) · b47a1674
      Yih-Dar authored
      
      
      * Remove unused decoder_layerdrop
      
      * Update SPECIAL_CASES_TO_ALLOW for MT5Config
      
      * Remove unused position_embedding_init_scale
      
      * Remove unused decoder_max_relative_position
      
      * Use unused decoder_max_relative_position
      
      * Remove unused init_std
      
      * Remove unused forgotten attributes
      
      * Remove unused patch_norm
      
      * Remove unused max_seq_len
      
      * Update SPECIAL_CASES_TO_ALLOW for OneFormerConfig
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      b47a1674
  25. 07 Feb, 2023 1 commit