<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Philosophy

馃 Transformers is an opinionated library built for:

- machine learning researchers and educators seeking to use, study or extend large-scale Transformers models.
- hands-on practitioners who want to fine-tune those models or serve them in production, or both.
- engineers who just want to download a pretrained model and use it to solve a given machine learning task.

The library was designed with two strong goals in mind:

1. Be as easy and fast to use as possible:

  - We strongly limited the number of user-facing abstractions to learn; in fact, there are almost no abstractions,
    just three standard classes required to use each model: [configuration](main_classes/configuration),
    [models](main_classes/model), and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [image processor](main_classes/image_processor) for vision, [feature extractor](main_classes/feature_extractor) for audio, and [processor](main_classes/processors) for multimodal inputs).
  - All of these classes can be initialized in a simple and unified way from pretrained instances by using a common
    `from_pretrained()` method which downloads (if needed), caches and
    loads the related class instance and associated data (configurations' hyperparameters, tokenizers' vocabulary,
    and models' weights) from a pretrained checkpoint provided on [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint.
  - On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly
    using a model for inference on a given task and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`); see the sketch after this list.
  - As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to
    extend or build upon the library, just use regular Python, PyTorch, TensorFlow, and Keras modules and inherit from the base
    classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy for models, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post.

2. Provide state-of-the-art models with performance as close as possible to the original models:

  - We provide at least one example for each architecture which reproduces a result provided by the official authors
    of said architecture.
  - The code is usually as close to the original code base as possible, which means some PyTorch code may not be as
    *pytorchic* as it could be because it was converted from TensorFlow code, and vice versa.
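
For a concrete feel, here is a minimal sketch of the first goal in action. The `bert-base-uncased` checkpoint is an arbitrary example; any checkpoint on the Hub works the same way:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer, pipeline

# The three standard classes, each loaded from the same pretrained checkpoint
# with the common `from_pretrained()` method (downloaded and cached on first use).
config = AutoConfig.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# The `pipeline` API bundles a model and its preprocessing class for a task.
# With no model specified, a default checkpoint for the task is used.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers is an opinionated library."))
```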

A few other goals:

- Expose the models' internals as consistently as possible:

  - We give access, using a single API, to the full hidden states and attention weights.
  - The preprocessing classes and base model APIs are standardized to easily switch between models.

- Incorporate a subjective selection of promising tools for fine-tuning and investigating these models:

  - A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning.
  - Simple ways to mask and prune Transformer heads (see the sketch after this list).

- Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another.
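
A rough sketch of these tools together (again using `bert-base-uncased` purely as an example checkpoint):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Grow the vocabulary, then resize the embedding matrix to match.
tokenizer.add_tokens(["<new_token>"])
model.resize_token_embeddings(len(tokenizer))

# Prune heads 0 and 1 of layer 0 (a dict mapping layer index to head indices).
model.prune_heads({0: [0, 1]})

# The same forward call can return all hidden states and attention weights.
inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
print(len(outputs.hidden_states))   # embedding output + one tensor per layer
print(outputs.attentions[0].shape)  # (batch, heads, seq_len, seq_len)
```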

## Main concepts

The library is built around three types of classes for each model:

- **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) or JAX/Flax models ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)) that work with the pretrained weights provided in the library.
- **Configuration classes** store the hyperparameters required to build a model (such as the number of layers and hidden size). You don't always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model).
- **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provides methods for encoding and decoding strings into a list of token embedding indices to be fed to a model (see the sketch after this list). [Image processors](main_classes/image_processor) preprocess vision inputs, [feature extractors](main_classes/feature_extractor) preprocess audio inputs, and a [processor](main_classes/processors) handles multimodal inputs.
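
For instance, a tokenizer's round trip between a string and token indices looks like this (the checkpoint is chosen arbitrarily, and the exact indices shown are specific to it):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encode a string into the token indices the model consumes...
ids = tokenizer.encode("Hello world")
print(ids)  # e.g. [101, 7592, 2088, 102] for this checkpoint

# ...and decode those indices back into a string.
print(tokenizer.decode(ids))  # "[CLS] hello world [SEP]"
```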

All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods:

- `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either
  provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or
  stored locally (or on a server) by the user.
- `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using
  `from_pretrained()`.
- `push_to_hub()` lets you share a model, configuration, and preprocessing class on the Hub, so it is easily accessible to everyone.
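
Put together, a sketch of the full round trip (the local directory and repo name below are placeholders):

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Save the model and its preprocessing class locally...
model.save_pretrained("./my-local-checkpoint")
tokenizer.save_pretrained("./my-local-checkpoint")

# ...reload them from that local directory...
model = AutoModel.from_pretrained("./my-local-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("./my-local-checkpoint")

# ...or share them on the Hub (requires authentication, e.g. `huggingface-cli login`).
# model.push_to_hub("my-username/my-model")
# tokenizer.push_to_hub("my-username/my-model")
```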